Listening to Sounds of Silence for Speech Denoising

Neural Information Processing Systems

We introduce a deep learning model for speech denoising, a long-standing challenge in audio analysis arising in numerous applications. Our approach is based on a key observation about human speech: there is often a short pause between each sentence or word. In a recorded speech signal, those pauses introduce a series of time periods during which only noise is present. We leverage these incidental silent intervals to learn a model for automatic speech denoising given only mono-channel audio. Detected silent intervals over time expose not just pure noise but its time-varying features, allowing the model to learn noise dynamics and suppress it from the speech signal. Experiments on multiple datasets confirm the pivotal role of silent interval detection for speech denoising, and our method outperforms several state-of-the-art denoising methods, including those that accept only audio input (like ours) and those that denoise based on audiovisual input (and hence require more information). We also show that our method enjoys excellent generalization properties, such as denoising spoken languages not seen during training.
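To make the core idea concrete, below is a minimal, hand-crafted sketch in Python: frames whose energy falls below a threshold are treated as noise-only "silent intervals", their average spectrum serves as the noise profile, and simple spectral subtraction suppresses it. This is only an illustration under stated assumptions, not the paper's method; the paper replaces both the interval detector and the noise-removal step with learned neural components, and the function name, parameters, and percentile threshold here are hypothetical.

```python
# Illustrative sketch only: estimate noise from the quietest frames and
# subtract it. The paper learns both steps with neural networks; this
# energy-threshold + spectral-subtraction pipeline is a hand-crafted stand-in.
import numpy as np
from scipy.signal import stft, istft

def denoise_with_silent_intervals(audio, sr, frame_len=1024, hop=256,
                                  silence_percentile=10.0):
    """Estimate a noise profile from low-energy frames and subtract it."""
    # Short-time Fourier transform: one complex spectrum per frame.
    _, _, spec = stft(audio, fs=sr, nperseg=frame_len,
                      noverlap=frame_len - hop)
    mag, phase = np.abs(spec), np.angle(spec)

    # Crude "silent interval detection" (hypothetical threshold): frames whose
    # energy falls in the lowest percentile are assumed to be noise-only.
    frame_energy = mag.sum(axis=0)
    silent = frame_energy <= np.percentile(frame_energy, silence_percentile)

    # Noise profile: average magnitude spectrum over the silent frames.
    noise_mag = mag[:, silent].mean(axis=1, keepdims=True)

    # Spectral subtraction with a floor to avoid negative magnitudes.
    clean_mag = np.maximum(mag - noise_mag, 0.05 * mag)

    # Rebuild the waveform using the original (noisy) phase.
    _, clean = istft(clean_mag * np.exp(1j * phase), fs=sr,
                     nperseg=frame_len, noverlap=frame_len - hop)
    return clean

# Usage (hypothetical): clean = denoise_with_silent_intervals(noisy, 16000)
```

The fixed percentile threshold is a stand-in for a learned detector, and spectral subtraction cannot track time-varying noise; the paper's contribution is precisely to learn silent-interval detection and to exploit the exposed noise dynamics within a neural model.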


Review for NeurIPS paper: Listening to Sounds of Silence for Speech Denoising

Neural Information Processing Systems

Additional Feedback: AFTER REBUTTAL: I would like to thank the authors for providing the additional experiments. I highly encourage the authors to include them in the final version; they will make the paper and its general message stronger. Regarding real-world experiments, I advise the authors to launch a subjective study (MOS / ITU-T P.835 / ITU-T P.808) on real-world data to better evaluate the proposed method. Lastly, I suggest clarifying the VAD setup; I still think the results in Table 1 are a bit misleading as presented now. Adding a comment about it will make the paper much better.

